Biological plausibility of C1-C3 and φ
We thank all reviewers for their comments. We will correct all typos and address all minor comments in the final paper. Do other criticisms of BP (for spiking and recurrent neural networks) remain? This work was published on 7 Jun 2020 (after the NeurIPS'20 deadline). It includes another approximation to BP but no equivalence result; we will add it to the related work. Please see line 8 in the response to Reviewer #1.
Learning Internal Biological Neuron Parameters and Complexity-Based Encoding for Improved Spiking Neural Networks Performance
Rudnicka, Zofia, Szczepanski, Janusz, Pregowska, Agnieszka
This study introduces a novel approach by replacing the traditional perceptron neuron model with a biologically inspired probabilistic meta neuron, where the internal neuron parameters are jointly learned, leading to improved classification accuracy of spiking neural networks (SNNs). To validate this innovation, we implement and compare two SNN architectures: one based on standard leaky integrate-and-fire (LIF) neurons and another utilizing the proposed probabilistic meta neuron model. As a second key contribution, we present a new biologically inspired classification framework that uniquely integrates SNNs with Lempel-Ziv complexity (LZC), a measure closely related to the entropy rate. By combining the temporal precision and biological plausibility of SNNs with the capacity of LZC to capture structural regularity, the proposed approach enables efficient and interpretable classification of spatiotemporal neural data, an aspect not addressed in existing works. We consider learning algorithms such as backpropagation, spike-timing-dependent plasticity (STDP), and the Tempotron learning rule. To explore neural dynamics, we use Poisson processes to model neuronal spike trains, a well-established method for simulating the stochastic firing behavior of biological neurons. Our results reveal that, depending on the training method, the classifier's efficiency can improve by up to 11.00%, highlighting the advantage of learning additional neuron parameters beyond the traditional focus on weighted inputs alone.
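As an illustration of the two ingredients the abstract combines, here is a minimal sketch (not the authors' code) that generates a Poisson spike train and scores it with Lempel-Ziv (1976) complexity; the function names, bin width, and firing rate are our assumptions:

```python
import numpy as np

def lz76_complexity(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of a binary string:
    each new phrase is the shortest prefix of the remainder that has not yet
    appeared as a substring of everything parsed so far."""
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1   # one more distinct phrase
        i += k
    return c

def poisson_train(n_bins, p_spike, rng):
    """Binary spike train: each time bin spikes independently with prob p_spike."""
    return ''.join('1' if r < p_spike else '0' for r in rng.random(n_bins))
```

A perfectly periodic train compresses into very few phrases, while a Poisson train of the same rate does not, which is the structural-regularity signal the classifier exploits.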
The GAIN Model: A Nature-Inspired Neural Network Framework Based on an Adaptation of the Izhikevich Model
Hooper, Gage K. R. (Independent Researcher; future Aerospace Engineering student, Embry-Riddle Aeronautical University). May 31, 2025.
While many neural networks focus on layers to process information, the GAIN model uses a grid-based structure to improve biological plausibility and the dynamics of the model. The grid structure helps neurons interact with their closest neighbors and improve their connections with one another, as seen in biological neurons. Implemented with the Izhikevich model, this approach allows for a computationally efficient and biologically accurate simulation that can aid in the development of neural networks, large-scale simulations, and the neuroscience field. This adaptation of the Izhikevich model can improve the dynamics and accuracy of the model, allowing its uses to be specialized but efficient. Early spiking neural network (SNN) models, such as the Hodgkin-Huxley model (1952), were detailed and capable of replicating the exact dynamics of neuronal spiking, accounting for every ion channel, but were too computationally inefficient. An SNN is a computational model that simulates the function of neurons: a neuron activates when its action potential, the difference between interior and exterior voltages (the membrane potential), rapidly rises and falls. In response to the limitations of these models, Eugene Izhikevich (2003) introduced a spiking neural network model that achieves a balance between biological plausibility and computational efficiency (see Appendix A). The Izhikevich model can reproduce neuron behaviors while remaining computationally lightweight, and it has been widely adopted for large-scale simulations.
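For reference, the Izhikevich (2003) model the abstract builds on can be simulated in a few lines. This sketch (not the GAIN implementation) uses the published regular-spiking parameters a=0.02, b=0.2, c=-65, d=8 and simple Euler integration; the function name and step size are ours:

```python
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """Euler simulation of a single Izhikevich (2003) neuron driven by the
    input current sequence I. Defaults are the published regular-spiking
    parameters. Returns the voltage trace (mV) and spike times (ms)."""
    v, u = c, b * c                # start at rest
    vs, spikes = [], []
    for step, i_ext in enumerate(I):
        # v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike: record a capped peak, then reset
            vs.append(30.0)
            v, u = c, u + d
            spikes.append(step * dt)
        else:
            vs.append(v)
    return np.array(vs), spikes
```

With a constant suprathreshold drive the regular-spiking cell fires tonically, which is the behavior the model is known for reproducing cheaply.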
Bio-Inspired Mamba: Temporal Locality and Bioplausible Learning in Selective State Space Models
This paper introduces Bio-Inspired Mamba (BIM), a novel online learning framework for selective state space models that integrates biological learning principles with the Mamba architecture. BIM combines Real-Time Recurrent Learning (RTRL) with Spike-Timing-Dependent Plasticity (STDP)-like local learning rules, addressing the challenges of temporal locality and biological plausibility in training spiking neural networks. Our approach leverages the inherent connection between backpropagation through time and STDP, offering a computationally efficient alternative that maintains the ability to capture long-range dependencies. We evaluate BIM on language modeling, speech recognition, and biomedical signal analysis tasks, demonstrating competitive performance against traditional methods while adhering to biological learning principles. Results show improved energy efficiency and potential for neuromorphic hardware implementation. BIM not only advances the field of biologically plausible machine learning but also provides insights into the mechanisms of temporal information processing in biological neural networks.
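The STDP-like local learning rule mentioned above can be illustrated with a standard trace-based pairwise update. This is a generic textbook sketch, not BIM's actual rule, and the function name and constants are ours:

```python
import numpy as np

def stdp_step(w, pre_spk, post_spk, x_pre, x_post,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0):
    """One step of a trace-based pairwise STDP rule.
    x_pre / x_post are exponentially decaying spike traces (tau in ms)."""
    decay = np.exp(-dt / tau)
    x_pre = x_pre * decay + pre_spk     # presynaptic trace
    x_post = x_post * decay + post_spk  # postsynaptic trace
    # potentiate when a postsynaptic spike follows recent presynaptic activity,
    # depress when a presynaptic spike follows recent postsynaptic activity
    dw = a_plus * post_spk * x_pre - a_minus * pre_spk * x_post
    return float(np.clip(w + dw, 0.0, 1.0)), x_pre, x_post
```

The update needs only quantities available at the synapse at the current time step, which is the temporal-locality property the paper contrasts with backpropagation through time.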
On the biological plausibility of orthogonal initialisation for solving gradient instability in deep neural networks
Manchev, Nikolay, Spratling, Michael
Initialising the synaptic weights of artificial neural networks (ANNs) with orthogonal matrices is known to alleviate vanishing and exploding gradient problems. A major objection against such initialisation schemes is that they are deemed biologically implausible, as they mandate factorization techniques that are difficult to attribute to a neurobiological process. This paper presents two initialisation schemes that allow a network to naturally evolve its weights toward orthogonal matrices, provides a theoretical analysis showing that pre-training orthogonalisation always converges, and empirically confirms that the proposed schemes outperform randomly initialised recurrent and feedforward networks.
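The factorisation-based initialisation the paper contrasts against can be sketched as a QR-based orthogonal draw, together with the norm-preservation property that motivates it; the helper name is ours:

```python
import numpy as np

def orthogonal_init(n, rng):
    """Random orthogonal weight matrix via QR decomposition of a Gaussian
    matrix; sign-fixing with the R diagonal makes the draw Haar-uniform.
    This factorisation is the step argued to be hard to attribute to a
    neurobiological process."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))
```

Because an orthogonal matrix preserves vector norms exactly, signals (and, by the same argument, gradients) neither vanish nor explode when propagated through many such layers.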
The whole brain architecture approach: Accelerating the development of artificial general intelligence by referring to the brain
The vastness of the design space created by the combination of a large number of computational mechanisms, including machine learning, is an obstacle to creating an artificial general intelligence (AGI). Brain-inspired AGI development, in other words, cutting down the design space to look more like a biological brain, which is an existing model of a general intelligence, is a promising plan for solving this problem. However, it is difficult for an individual to design a software program that corresponds to the entire brain because the neuroscientific data required to understand the architecture of the brain are extensive and complicated. The whole-brain architecture approach divides the brain-inspired AGI development process into the task of designing the brain reference architecture (BRA) -- the flow of information and the diagram of corresponding components -- and the task of developing each component using the BRA. This is called BRA-driven development. Another difficulty lies in the extraction of the operating principles necessary for reproducing the cognitive-behavioral function of the brain from neuroscience data. Therefore, this study proposes the Structure-constrained Interface Decomposition (SCID) method, which is a hypothesis-building method for creating a hypothetical component diagram consistent with neuroscientific findings. The application of this approach has begun for building various regions of the brain. Moving forward, we will examine methods of evaluating the biological plausibility of brain-inspired software. This evaluation will also be used to prioritize different computational mechanisms, which should be merged, associated with the same regions of the brain.
Continual Weight Updates and Convolutional Architectures for Equilibrium Propagation
Ernoult, Maxence, Grollier, Julie, Querlioz, Damien, Bengio, Yoshua, Scellier, Benjamin
Equilibrium Propagation (EP) is a biologically inspired alternative algorithm to backpropagation (BP) for training neural networks. It applies to RNNs fed by a static input x that settle to a steady state, such as Hopfield networks. EP is similar to BP in that in the second phase of training, an error signal propagates backwards in the layers of the network, but contrary to BP, the learning rule of EP is spatially local. Nonetheless, EP suffers from two major limitations. On the one hand, due to its formulation in terms of real-time dynamics, EP entails long simulation times, which limits its applicability to practical tasks. On the other hand, the biological plausibility of EP is limited by the fact that its learning rule is not local in time: the synapse update is performed after the dynamics of the second phase have converged and requires information from the first phase that is no longer available physically. Our work addresses these two issues and aims at widening the spectrum of EP from standard machine learning models to more bio-realistic neural networks. First, we propose a discrete-time formulation of EP which simplifies the equations, speeds up training, and extends EP to CNNs. Our CNN model achieves the best performance ever reported on MNIST with EP. Using the same discrete-time formulation, we introduce Continual Equilibrium Propagation (C-EP): the weights of the network are adjusted continually in the second phase of training using information that is local in both space and time. We show that in the limit of slow changes of synaptic strengths and small nudging, C-EP is equivalent to BPTT (Theorem 1). We numerically demonstrate Theorem 1 and C-EP training on MNIST and generalize it to the bio-realistic situation of a neural network with asymmetric connections between neurons.
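The relation between the standard EP update and the continual C-EP update can be illustrated on a Hopfield-style toy. This is a simplified sketch (function names and the hard-sigmoid ρ are ours, and the trajectory is held fixed rather than co-evolving with the weights as in the paper): summing C-EP updates along the second-phase trajectory telescopes to the end-of-phase EP update.

```python
import numpy as np

def rho(s):
    return np.clip(s, 0.0, 1.0)   # hard-sigmoid activation

def ep_update(w, s_free, s_nudged, beta, lr):
    """Standard EP: one weight update after the nudged (second) phase has
    converged, contrasting Hebbian terms at the two fixed points."""
    return w + (lr / beta) * (np.outer(rho(s_nudged), rho(s_nudged))
                              - np.outer(rho(s_free), rho(s_free)))

def cep_update(w, s_prev, s_next, beta, lr):
    """C-EP: a continual update from two consecutive states of the second
    phase; summed along the trajectory it telescopes to ep_update."""
    return w + (lr / beta) * (np.outer(rho(s_next), rho(s_next))
                              - np.outer(rho(s_prev), rho(s_prev)))
```

The continual rule needs only the current and previous states, restoring locality in time, while its accumulated effect matches the standard EP update.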
Backprop Diffusion is Biologically Plausible
Betti, Alessandro, Gori, Marco
The Backpropagation algorithm relies on the abstraction of using a neural model that gets rid of the notion of time, since the input is mapped instantaneously to the output. In this paper, we claim that this abstraction of ignoring time, along with the abrupt input changes that occur when feeding the training set, are in fact the reasons why, in some papers, Backprop's biological plausibility is regarded as an arguable issue. We show that as soon as a deep feedforward network operates with neurons with time-delayed response, the backprop weight update turns out to be the basic equation of a biologically plausible diffusion process based on forward-backward waves. We also show that such a process approximates the gradient very well for inputs that are not too fast with respect to the depth of the network. These remarks somewhat disclose the diffusion process behind the backprop equation and lead us to interpret the corresponding algorithm as a degeneration of a more general diffusion process that also takes place in neural networks with cyclic connections.